Conversation

@manuelcandales manuelcandales commented Oct 8, 2024

Stack from ghstack (oldest at bottom):

@swolchok's technique is superior to the TensorReader/TensorWriter approach I introduced in D63703174, so I am rewriting my build size reduction stack on top of his approach.

Superior how?

  • It should lead to a smaller overall build size. Current measurements indicate this; complete data will be published once the stack is complete.
  • It is better suited for dtype-selective builds, since it passes the op name to all of the ET_SWITCHes involved.
  • It is more performant. Current measurements for clamp.Tensor_out indicate this. Note that in the data below, my stack is marginally more performant for the vanilla case (no broadcast and all dtypes equal), but only because I added a "fast path" for that case, which could trivially be added to Scott's approach as well. The more relevant comparisons are the mixed-dtype and broadcasting numbers. A simplified sketch of the op-name dispatch and the vanilla-case fast path follows the build size numbers below.
Baseline

clamp.Tensor_out no broadcast float: 25451 [23423 - 28839] microseconds
clamp.Tensor_out no broadcast double: 25461 [23377 - 50940] microseconds
clamp.Tensor_out no broadcast mixed dtype: 23367 [21353 - 27022] microseconds
clamp.Tensor_out broadcast: 702529 [679667 - 742005] microseconds

Manuel C

clamp.Tensor_out no broadcast float: 22919 [21333 - 27140] microseconds
clamp.Tensor_out no broadcast double: 23095 [21472 - 27462] microseconds
clamp.Tensor_out no broadcast mixed dtype: 35042 [32875 - 42491] microseconds
clamp.Tensor_out broadcast: 936541 [916437 - 971499] microseconds

Scott W

clamp.Tensor_out no broadcast float: 28263 [26458 - 32832] microseconds
clamp.Tensor_out no broadcast double: 27442 [25548 - 39417] microseconds
clamp.Tensor_out no broadcast mixed dtype: 25592 [23620 - 30148] microseconds
clamp.Tensor_out broadcast: 695399 [674244 - 738919] microseconds

Build size reduction after Scott's diffs touching clamp.Tensor_out and where.self_out:

  • clamp: 7.42 MB -> 119 KB
  • where: 106 KB -> 16 KB
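
To make the bullets above more concrete, here is a minimal standalone C++ sketch of the two ideas: threading the op name through the dtype dispatch so a dtype-selective build can prune unused branches, and a fast path for the vanilla same-dtype, no-broadcast case. This is an illustration only, not the ExecuTorch implementation; `ScalarType`, `Input`, `load_fn_for`, and the simplified `apply_tritensor_elementwise_fn` signature below are assumptions made for the example.

```cpp
// Standalone sketch: a shared three-input elementwise apply helper where the
// dtype switch happens once per input (not once per dtype combination), the op
// name is available to every dispatch site, and a fast path covers the vanilla
// same-dtype case. Illustrative only; not the real ExecuTorch signatures.
#include <cstddef>
#include <cstdint>
#include <functional>
#include <stdexcept>
#include <string>
#include <vector>

enum class ScalarType { Float, Double, Int };

// Return a type-erased "load element i as double" callback for one input.
// One small switch per input replaces a combinatorial switch over all inputs.
std::function<double(const void*, size_t)> load_fn_for(ScalarType t,
                                                       const char* op_name) {
  switch (t) {
    case ScalarType::Float:
      return [](const void* p, size_t i) { return (double)((const float*)p)[i]; };
    case ScalarType::Double:
      return [](const void* p, size_t i) { return ((const double*)p)[i]; };
    case ScalarType::Int:
      return [](const void* p, size_t i) { return (double)((const int32_t*)p)[i]; };
  }
  throw std::runtime_error(std::string(op_name) + ": unsupported dtype");
}

struct Input { const void* data; ScalarType dtype; };

template <typename Fn>
void apply_tritensor_elementwise_fn(const char* op_name, Fn&& fn,
                                    Input a, Input b, Input c,
                                    float* out, size_t n) {
  const bool all_float = a.dtype == ScalarType::Float &&
                         b.dtype == ScalarType::Float &&
                         c.dtype == ScalarType::Float;
  if (all_float) {  // "vanilla" fast path: no per-element conversion machinery
    const float* pa = (const float*)a.data;
    const float* pb = (const float*)b.data;
    const float* pc = (const float*)c.data;
    for (size_t i = 0; i < n; ++i) out[i] = (float)fn(pa[i], pb[i], pc[i]);
    return;
  }
  // Mixed-dtype path: dispatch once per input, then run a single generic loop.
  auto la = load_fn_for(a.dtype, op_name);
  auto lb = load_fn_for(b.dtype, op_name);
  auto lc = load_fn_for(c.dtype, op_name);
  for (size_t i = 0; i < n; ++i)
    out[i] = (float)fn(la(a.data, i), lb(b.data, i), lc(c.data, i));
}

int main() {
  std::vector<float> in{-2.f, 0.5f, 3.f}, lo{0.f, 0.f, 0.f}, hi{1.f, 1.f, 1.f};
  std::vector<float> out(3);
  // clamp(x, lo, hi) expressed as the per-element lambda
  apply_tritensor_elementwise_fn(
      "clamp.Tensor_out",
      [](double x, double l, double h) { return x < l ? l : (x > h ? h : x); },
      {in.data(), ScalarType::Float}, {lo.data(), ScalarType::Float},
      {hi.data(), ScalarType::Float}, out.data(), out.size());
}
```

The design point this sketch tries to capture is that dtype dispatch cost (and code size) grows linearly with the number of inputs rather than with the number of dtype combinations, which is where the build size savings come from; the fast path restores the tight same-type inner loop for the common case.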

Differential Revision: D63838072

pytorch-bot bot commented Oct 8, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6005

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 2b59ac2 with merge base cb12061:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed label Oct 8, 2024
@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D63838072

@malfet malfet left a comment

Sure, though being more verbose is imo a good code trait...

@facebook-github-bot

This pull request has been merged in a79caab.
